    Crises, Creep, and the Surveillance State

    Targeting Exceptions

    On May 26, 2020, the forty-fifth President of the United States, Donald Trump, tweeted: “There is NO WAY (ZERO!) that Mail-In Ballots will be anything less than substantially fraudulent. Mail boxes will be robbed, ballots will be forged & even illegally printed out & fraudulently signed.” Later that same day, Twitter appended an addendum to the President’s tweets so viewers could “get the facts” about California’s mail-in ballot plans and provided a link. In contrast, Facebook’s CEO Mark Zuckerberg refused to take action on President Trump’s posts. Only when it came to Trump’s support of the Capitol riot did both Facebook and Twitter suspend his account. Differences in attitude between platforms are reflected in their policies toward political advertisements. While Twitter bans such ads, Facebook generally neither bans nor fact-checks them. The dissemination of fake news increases the likelihood of users believing it and passing it on, consequently causing tremendous reputational harm to public representatives, impairing the general public interest, and eroding democracy in the long term. Such dissemination depends on online intermediaries that operate platforms, facilitate dissemination, and govern the flow of information by moderating, providing algorithmic recommendations, and targeting third-party advertisers. Should intermediaries bear liability for moderating or failing to moderate? And what about providing algorithmic recommendations and allowing data-driven advertisements directed toward susceptible users? In A Declaration of the Independence of Cyberspace, John Perry Barlow introduced the concept of internet exceptionalism, differentiating the internet from other existing media. Internet exceptionalism is at the heart of Section 230 of the Communications Decency Act, which provides intermediaries immunity from civil liability for content created by other content providers. Intermediaries like Facebook and Twitter are thereby immune from liability for content created by users and advertisers. However, Section 230 is currently under attack. In 2020, Trump issued an “Executive Order on Preventing Online Censorship” that aimed to limit platforms’ protections against liability for intermediary-moderated content. Legislative bills seeking to narrow Section 230’s scope soon followed. From another direction, attacks on the overall immunity provided by Section 230 emerged alongside the transition from an internet society to a data-driven algorithmic society—one that changed intermediaries’ scope and role in information dissemination. These changes in the utility of intermediaries require reevaluation of their duties; that is where this Article steps in. This Article focuses on the dissemination of fake news stories as a test case. It maps the roles intermediaries play in the dissemination of fake news by hosting and moderating content, deploying algorithmically personalized recommendations, and using data-driven targeted advertising. The first step toward developing a legal policy for intermediary liability is identifying the different roles intermediaries play in the dissemination of fake news stories. After mapping these roles, this Article examines intermediary liability case law and reflects on internet exceptionalism’s current approach and recent developments. It further examines normative free speech considerations regarding intermediary liability within the context of the different roles intermediaries play in fake news dissemination and argues that the liability regime must correspond with the intermediary’s role in dissemination. By targeting exceptions to internet exceptionalism, this Article outlines a nuanced framework for intermediary liability. Finally, it proposes subjecting intermediaries to transparency obligations regarding moderation practices and imposing duties to conduct algorithmic impact assessments as part of consumer protection regulation.

    Manipulating, Lying, and Engineering the Future

    Decision-making should reflect personal autonomy. Yet, it is not entirely an autonomous process. Influencing individuals’ decision-making is not new. It is and always has been the engine that drives markets, politics, and debates. However, in the digital marketplace of ideas, the nature of influence is different in scale, scope, and depth. The asymmetry of information shapes a new model of surveillance capitalism. This model promises profits gained from behavioral information collected from consumers and from personal targeting. The Internet of Things, Big Data, and Artificial Intelligence open a new dimension for manipulation. In the age of the Metaverse, which will be mediated through virtual spaces and augmented reality, manipulation is expected to grow stronger. Such manipulation could be performed by either commercial corporations or governments, though this Article primarily focuses on the former rather than the latter. Surveillance capitalism depends not only on technology but also on marketing, as commercial entities push their goods and agendas onto their consumers. This new economic order presents benefits in the form of improved services, but it also has negative consequences: it treats individuals as instruments; it may infringe on individuals’ autonomy and future development; and it manipulates consumers to make commercial choices that could potentially harm their own welfare. Moreover, it may also hinder individuals’ free speech and erode some of the privileges enshrined in a democracy. What can be done to limit the negative consequences of hyper-manipulation in digital markets? Should the law impose limitations on digital influence? If so, how and when? This Article aims to answer these questions in the following manner: First, this Article demonstrates how companies influence decisions by collecting, analyzing, and manipulating information. Understanding the tools of the new economic order is the first step in developing legal policy that mitigates harm. Second, this Article analyzes the concept of manipulation. It explains how digital manipulation differs from traditional commercial influences in scope, scale, and depth. Since there are many forms of manipulation, an outright ban on manipulation is not possible, nor is it desirable, since it could undermine the very basis of free markets and even free speech. As a result, this Article proposes a limiting principle for entities identified in the literature as “powerful commercial speakers,” focusing on regulating these entities’ lies and misrepresentations. This Article outlines disclosure obligations regarding contextual elements of advertisements and imposes a duty to avoid false information. In addition to administrative enforcement against commercial lies and misrepresentations, this Article advocates for a new remedy of compensation for autonomy infringement when a powerful speaker lies or fails to comply with mandated product disclosures. Third, this Article proposes a complementary solution for the long-term effects of manipulation. This solution does not focus on the manipulation itself, but rather offers limitations on data retention for commercial purposes. Such limitations can mitigate the depth of manipulation and may prevent commercial entities from shackling individuals to their past decisions. Fourth, this Article addresses possible objections to the proposed solutions by demonstrating that they are not in conflict with the First Amendment, but rather promote freedom of expression.

    Evil Nudges

    Publish, Share, Re-Tweet, and Repeat

    New technologies allow users to communicate ideas to a broad audience easily and quickly, affecting the way ideas are interpreted and their credibility. Each and every social network user can simply click “share” or “retweet” and automatically republish an existing post, exposing a new message to a wide audience. The dissemination of ideas can raise public awareness about important issues and bring about social, political, and economic change. Yet, digital sharing also provides vast opportunities to spread false rumors, defamation, and fake news stories at the thoughtless click of a button. The spreading of falsehoods can severely harm the reputation of victims, erode democracy, and infringe on the public interest. Holding the original publisher accountable and collecting damages from that publisher offers very limited redress since the harmful expression can continue to spread. How should the law respond to this phenomenon, and who should be held accountable? Drawing on multidisciplinary social science scholarship from network theory and cognitive psychology, this Article describes how falsehoods spread on social networks, the different motivations to disseminate them, the gravity of the harm they can inflict, and the likelihood of correcting false information once it has been distributed in this setting. This Article also describes the top-down influence of social media platform intermediaries, and how it enhances dissemination by exploiting users’ cognitive biases and creating social cues that encourage users to share information. Understanding how falsehoods spread is a first step towards providing a framework for meeting this challenge. The Article argues that it is high time to rethink intermediary duties and obligations regarding the dissemination of falsehoods. It examines a new perspective for mitigating the harm caused by the dissemination of falsehoods. The Article advocates harnessing social network intermediaries to meet the challenge of dissemination from the stage of platform design. It proposes innovative solutions for mitigating careless, irresponsible sharing of false rumors. The first solution focuses on a platform’s accountability for influencing user decision-making processes. “Nudges” can discourage users from thoughtless sharing of falsehoods and promote accountability ex ante. The second solution focuses on allowing effective ex post facto removal of falsehoods, defamation, and fake news stories from all profiles and locations where they have spread. Shaping user choices and designing platforms is value-laden, reflecting the platform’s particular set of preferences, and should not be taken for granted. Therefore, this Article proposes ways to incentivize intermediaries to adopt these solutions and mitigate the harm generated by the spreading of falsehoods. Finally, the Article addresses the limitations of the proposed solutions yet still concludes that they are more effective than current legal practices.
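    The sketch below illustrates the second proposal under stated assumptions: it supposes a platform records the reshare graph of each post (which post was shared from which), so that once an item is found to be false, removal can propagate to every profile and location to which it has spread. The names used here (SharingGraph, publish, remove_cascade) are hypothetical and do not correspond to any existing platform API.

        # Minimal sketch: a platform-side reshare graph that supports cascading removal.
        # Hypothetical names; illustrative only, not an existing platform API.
        from collections import defaultdict

        class SharingGraph:
            def __init__(self):
                self.children = defaultdict(list)  # post_id -> ids of direct reshares
                self.live = set()                  # posts currently visible on the platform

            def publish(self, post_id, parent=None):
                # Record a new post; if it is a reshare, remember its parent.
                self.live.add(post_id)
                if parent is not None:
                    self.children[parent].append(post_id)

            def remove_cascade(self, post_id):
                # Remove a post and every reshare descended from it; return what was removed.
                removed, stack = set(), [post_id]
                while stack:
                    current = stack.pop()
                    if current in self.live:
                        self.live.discard(current)
                        removed.add(current)
                    stack.extend(self.children[current])
                return removed

        graph = SharingGraph()
        graph.publish("rumor")                      # original false post
        graph.publish("share-1", parent="rumor")    # re-shared by another user
        graph.publish("share-2", parent="share-1")  # shared onward again
        print(graph.remove_cascade("rumor"))        # {'rumor', 'share-1', 'share-2'}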

    Content Providers’ Secondary Liability: A Social Network Perspective

    Recent technological developments allow Internet users to disseminate ideas to a large audience. These technological advances empower individuals and promote important social objectives. However, they also create a setting for speech-related torts, harm, and abuse. One legal path to deal with online defamation turns to the liability of online content providers who facilitate the harmful exchanges. The possibility of compelling them to remove defamatory content and collecting damages from them has attracted a great deal of attention in scholarly work, court decisions, and regulations. Different countries have established different legal regimes. The United States provides an extensive shield—an overall immunity that exempts content providers from liability for speech torts. This policy is not adopted worldwide. The E.U. directive outlines a “notice-and-takedown” safe haven. Other countries, such as Canada, use common tort law practices. This Article criticizes all of these policy models for being either over- or under-inclusive. This Article makes the case for a context-specific regulatory regime. It identifies specific characteristics of different content providers, each with its own unique setting, which call for nuanced legal rules that would provide an optimal liability regime. To that end, the Article sets forth an innovative taxonomy: it relies on sociological studies premised on network theory and analysis, which is neutral to technological advances. This framework distinguishes between different technological settings based on the strength of social ties formed in each context. The Article explains that the strength of such ties influences the social context of online interactions and the flow of information. The strength of ties is the best tool for designing different liability regimes; such ties serve as a proxy for the severity of harm that defamatory online speech might cause, and for the social norms that might mitigate or exacerbate speech-related harm. The proposed taxonomy makes it possible to apply a sociological analysis to legal policy and to outline modular rules for content providers’ liability at every juncture. This Article does so while taking into account basic principles of tort law, as well as freedom of speech, reputation, fairness, efficiency, and the importance of promoting innovation.

    NFT for Eternity

    Non-fungible tokens (NFTs) are unique tokens stored on a digital ledger – the blockchain. They are meant to represent unique, non-interchangeable digital assets, as there is only one token with that exact data. Moreover, the information attached to the token cannot be altered as it could be in a regular database. While copies of these digital items are available to all, NFTs are tracked on blockchains to provide the owner with proof of ownership. This possibility of buying and owning digital assets can be attractive to many individuals. NFTs are presently at the stage of early adoption, and their uses are expanding. In the future, they could become a fundamental and integral component of tomorrow’s web. NFTs bear the potential to become the engine of speech: as tokenized expressions cannot be altered or deleted, they enable complete freedom of expression, which is not subject to censorship. However, tokenized speech can also bear significant costs and risks, which can threaten individual dignity and the public interest. Anyone can tokenize a defamatory tweet, a shaming tweet, or a tweet that includes personal identifying information, and these tokenized expressions can never be deleted or removed from the blockchain, risking permanent damage to the reputations of those involved. Even worse, anyone can tokenize extremist political views, such as alt-right incitement, which could ultimately result in violence against minorities and infringe on the public interest. To date, the literature has focused on harmful speech that appears on dominant digital platforms but has yet to explore and address the benefits, challenges, and risks of tokenized speech. Such speech cannot be deleted from the web in the same way traditional internet intermediaries currently remove content. Thus, the potential influence of NFTs on freedom of expression remains unclear. This Article strives to fill the gap and contribute to the literature in several ways. It introduces the idea of owning digital assets by using NFT technology and surveys the main uses of tokenizing digital assets and the benefits of such practices. It aims to raise awareness of the potential of tokenized speech to circumvent censorship and to act as the engine of freedom of expression. Yet it also addresses the challenges and risks posed by tokenized speech. Finally, it proposes various solutions and remedies for the abuse of NFT technology, which may have the potential to perpetuate harmful speech. As we are well aware of the challenges inherent in our proposals for mitigation, this Article also addresses First Amendment objections to the proposed solutions.
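    To make the permanence point concrete, here is a minimal sketch of a hash-chained, append-only ledger in the spirit of a blockchain. It is illustrative only: it does not implement a real NFT standard such as ERC-721, and the names (SimpleLedger, append, verify) are hypothetical. Because each record commits to the hash of the record before it, any later edit to a tokenized expression is detectable, which is why such content cannot simply be corrected or deleted after the fact.

        # Minimal sketch of an append-only, hash-chained ledger (hypothetical names,
        # not a real NFT standard): altering any earlier record breaks the chain.
        import hashlib
        import json

        class SimpleLedger:
            def __init__(self):
                self.records = []  # each record stores the hash of the previous one

            def _hash(self, record):
                # Deterministic hash of a record's contents.
                return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

            def append(self, payload):
                prev_hash = self._hash(self.records[-1]) if self.records else "0" * 64
                record = {"prev_hash": prev_hash, **payload}
                self.records.append(record)
                return self._hash(record)  # usable as a token identifier

            def verify(self):
                # Tampering with an earlier record invalidates every later prev_hash link.
                return all(
                    self.records[i]["prev_hash"] == self._hash(self.records[i - 1])
                    for i in range(1, len(self.records))
                )

        ledger = SimpleLedger()
        token_id = ledger.append({"action": "mint", "owner": "alice", "content": "a tokenized tweet"})
        ledger.append({"action": "transfer", "token": token_id, "from": "alice", "to": "bob"})

        print(ledger.verify())                   # True: the chain is intact
        ledger.records[0]["content"] = "edited"  # attempt to alter the tokenized expression
        print(ledger.verify())                   # False: the alteration is detectable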

    The Eye in the Sky Delivers (and Influences) What You Buy

    Speak Out: Verifying and Unmasking Cryptocurrency User Identity

    Terror attacks pose a serious threat to public safety and national security. New technologies assist these attacks, magnify them, and render them deadlier. The more funding terrorist organizations manage to raise, the greater their capacity to recruit members, organize, and commit terror attacks. Since the September 11, 2001 terror attacks, law enforcement agencies have increased their efforts to develop more anti-terrorism and anti-money laundering regulations, which are designed to block the flow of terrorism financing and cut off its oxygen. However, at present, most regulatory measures focus on traditional currencies. As these restrictions become more successful, the likelihood that cryptocurrencies will be used as an alternative to fund illicit behaviors grows. Furthermore, the COVID-19 pandemic and subsequent social distancing guidelines have increased the use of cryptocurrencies for money laundering, material support to terror, and other financial crimes. Cryptocurrencies are a game-changer, significantly affecting market functions like never before and making it easier to finance terrorism and other types of criminal activity. These decentralized and (usually) anonymous currencies facilitate a high volume of transactions, allowing terrorists to engage in extensive fundraising, management, transfer, and spending for illegal activities. As cryptocurrencies gain popularity, the issue of regulating them becomes more urgent. This Article proposes to reform cryptocurrency regulation. It advocates for mandatory obligations directed at cryptocurrency issuers, wallet providers, and exchanges to verify the identity of users on the blockchain. Thus, courts could grant warrants obligating cryptocurrency-issuing companies to unmask the identity of cryptocurrency users when there is probable cause that their activities support terrorism or other money laundering schemes. Such reforms would stifle terrorism and other types of criminal activity financed through cryptocurrencies, curbing harmful activities and promoting national security. In recognition of the legal challenges this solution poses, this Article also addresses substantial objections that might be raised regarding the proposed reforms, such as innovation concerns, First Amendment arguments, and Fourth Amendment protections. It concludes by addressing measures to efficiently promote application of the proposed reforms.
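    A minimal sketch of the proposal follows, under stated assumptions: a wallet provider or exchange verifies users' identities at onboarding, keeps them off-chain, and discloses an identity only against a court-issued warrant naming the relevant blockchain address. The names (IdentityRegistry, Warrant, unmask) are hypothetical and do not correspond to any existing system or API.

        # Hypothetical sketch: off-chain identity records disclosed only under a court warrant.
        from dataclasses import dataclass

        @dataclass
        class Warrant:
            address: str          # blockchain address named in the warrant
            signed_by_court: bool

        class IdentityRegistry:
            def __init__(self):
                # address -> verified legal identity collected at onboarding (KYC)
                self._identities = {}

            def register(self, address, legal_name):
                self._identities[address] = legal_name

            def unmask(self, warrant):
                # Disclose an identity only for a court-signed warrant naming the address.
                if not warrant.signed_by_court:
                    raise PermissionError("a court-issued warrant is required")
                if warrant.address not in self._identities:
                    raise KeyError("address not registered with this provider")
                return self._identities[warrant.address]

        registry = IdentityRegistry()
        registry.register("0xabc123", "A. Example")
        print(registry.unmask(Warrant(address="0xabc123", signed_by_court=True)))  # A. Example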